I just watched the latest tutorial in the Machine Learning series ("Neural Network Model - Deep Learning with Neural Networks and TensorFlow"), and I came up with a rather simple solution to what I will call the "NN For Loop Problem" for the sake of this post.
Basically, we need to create a for loop that goes through each layer (each of which can have any number of nodes) and sets its weights and biases.
By putting the layers into a "Layers" dictionary, it becomes much easier to manipulate them and set their values than writing the code for each layer by hand (e.g. hidden_1_layer, hidden_2_layer, etc.).
Anyways here is the code:
import tensorflow as tf

input_data = 2   # It was 784 (28*28) in the tutorial
Layers = {}      # I will store all the layers in this dictionary
n_classes = 10   # Number of classes (number of output weights)
topology = [input_data, 5, 5, 5, n_classes]  # e.g. [2,3,1] = 2 weights at the input layer, 3 at hidden_1_layer, 1 output weight at the output layer

# Go through the topology list and add a layer to the "Layers" dictionary
# with the number of weights specified by the topology list.
for i in range(1, len(topology)):
    Layer = {'weights': tf.Variable(tf.random_normal([topology[i-1], topology[i]])),
             'biases': tf.Variable(tf.random_normal([topology[i]]))}
    Layers[i-1] = Layer
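To sanity-check the loop above without TensorFlow, the same index arithmetic can be traced in plain Python (the `shapes` list here is just for illustration, not part of the original code):

```python
topology = [2, 5, 5, 5, 10]  # same topology as above (input_data = 2, n_classes = 10)

# Each layer i connects topology[i-1] inputs to topology[i] outputs,
# so its weight matrix has shape (topology[i-1], topology[i]).
shapes = [(topology[i-1], topology[i]) for i in range(1, len(topology))]
print(shapes)  # [(2, 5), (5, 5), (5, 5), (5, 10)]
```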
To sum the previous layer's output times the weights and add the biases, we can write a function so that the code is easier to read, then use a for loop to call that function for each layer, and store the resulting values (or is it a tensor? I am a bit confused about what exactly a tensor is) in a dictionary so that we keep things neat.
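To make that step concrete, here is a small NumPy sketch of what one layer does (the names `layer_output` and `relu` are mine for illustration; the actual code uses tf.matmul, tf.add, and tf.nn.relu on tensors instead of arrays):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0)

def layer_output(prev, weights, biases, is_output=False):
    # sum(prev * weights) + biases, with ReLU on every layer except the output
    z = prev @ weights + biases
    return z if is_output else relu(z)

x = np.array([[1.0, 2.0]])            # one sample with 2 inputs
W = np.array([[1.0, 0.0, -1.0],
              [0.5, 1.0,  2.0]])      # a 2 -> 3 layer
b = np.zeros(3)
print(layer_output(x, W, b))          # [[2. 2. 3.]]
```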
note: I am not an expert in Python; I mostly write code in C++ but got into Python because I saw the Machine Learning series. I tested the code in a terminal and it seems to work. I do not know if this is an ideal solution.
Off to a solid start for sure. Now what's left is the addition of a for loop for the network model in the computation graph (l1,l2,l3...etc from the code).
-Harrison 8 years ago
Alright, so I didn't want to make things too messy by adding another for loop, so I added everything to the end of the for loop I already had.
So the finished code for neural_network_model looks like so:
import tensorflow as tf

input_data = 2   # It was 784 (28*28) in the tutorial
n_classes = 10   # Number of classes (number of output weights)
topology = [input_data, 5, 5, 5, n_classes]  # e.g. [2,3,1] = 2 weights at the input layer, 3 at hidden_1_layer, 1 output weight at the output layer

def neural_network_model(data):
    Layers = {}    # I will store all the layers in this dictionary
    LCompute = {}  # Layers for the computation graph

    # Go through the topology list and add a layer to the "Layers" dictionary
    # with the number of weights specified by the topology list.
    for i in range(1, len(topology)):
        Layer = {'weights': tf.Variable(tf.random_normal([topology[i-1], topology[i]])),
                 'biases': tf.Variable(tf.random_normal([topology[i]]))}
        Layers[i-1] = Layer

        # Code for the computation graph: the previous layer's output
        # (or the input data for the first layer) times the weights, plus the biases.
        l = tf.add(tf.matmul(LCompute[i-2] if i != 1 else data,
                             Layers[i-1]['weights']),
                   Layers[i-1]['biases'])
        # ReLU on every layer except the output layer
        l = tf.nn.relu(l) if i != len(topology)-1 else l
        LCompute[i-1] = l

    # The output layer is the last entry in LCompute
    return LCompute[len(topology)-2]
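To check the loop logic without building a TF graph, the same forward pass can be mocked in NumPy (weights fixed to ones and biases to zeros here so the values are easy to follow by hand; this mock is mine, not from the tutorial):

```python
import numpy as np

topology = [2, 5, 5, 5, 10]

def forward(data):
    Layers, LCompute = {}, {}
    for i in range(1, len(topology)):
        Layers[i-1] = {'weights': np.ones((topology[i-1], topology[i])),
                       'biases': np.zeros(topology[i])}
        # previous layer's output (or the input data for the first layer)
        prev = LCompute[i-2] if i != 1 else data
        l = prev @ Layers[i-1]['weights'] + Layers[i-1]['biases']
        l = np.maximum(l, 0) if i != len(topology)-1 else l
        LCompute[i-1] = l
    return LCompute[len(topology)-2]

out = forward(np.ones((1, 2)))
print(out.shape)  # (1, 10)
```

With all-ones inputs and weights, each output unit ends up at 2 * 5 * 5 * 5 = 250, which is a quick way to confirm every layer was actually applied.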